fairness and bias
Fairness and Bias in Multimodal AI: A Survey
Adewumi, Tosin, Alkhaled, Lama, Gurung, Namrata, van Boven, Goya, Pagliai, Irene
The importance of addressing fairness and bias in artificial intelligence (AI) systems cannot be over-emphasized. Mainstream media has been awash with news of incidents involving stereotypes and bias in many of these systems in recent years. In this survey, we fill a gap regarding the minimal study of fairness and bias in Large Multimodal Models (LMMs) compared to Large Language Models (LLMs), providing 50 examples of datasets and models along with the challenges affecting them; we identify a new category of quantifying bias (preuse), in addition to the two well-known ones in the literature: intrinsic and extrinsic; and we critically discuss the various ways researchers are addressing these challenges. Our method involved two slightly different search queries on Google Scholar, which returned about 33,400 and 538,000 results for the terms "Fairness and bias in Large Multimodal Models" and "Fairness and bias in Large Language Models", respectively. We believe this work contributes to filling this gap and provides insight to researchers and other stakeholders on ways to address the challenge of fairness and bias in multimodal AI.
Fairness and Bias in Algorithmic Hiring
Fabris, Alessandro, Baranowska, Nina, Dennis, Matthew J., Hacker, Philipp, Saldivar, Jorge, Borgesius, Frederik Zuiderveen, Biega, Asia J.
Employers are adopting algorithmic hiring technology throughout the recruitment pipeline. Algorithmic fairness is especially applicable in this domain due to its high stakes and structural inequalities. Unfortunately, most work in this space provides partial treatment, often constrained by two competing narratives, optimistically focused on replacing biased recruiter decisions or pessimistically pointing to the automation of discrimination. Whether, and more importantly what types of, algorithmic hiring can be less biased and more beneficial to society than low-tech alternatives currently remains unanswered, to the detriment of trustworthiness. This multidisciplinary survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness. Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations, providing recommendations for future work to ensure shared benefits for all stakeholders.
The nuanced debate over AI ethics
"You won't see many people with my background talking about ethics," said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company. A computer scientist who worked as a database and SQL developer and held data science- and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn't always gung-ho to talk AI ethics. Then she decided to write a book about it. "There has arguably never been a more exciting time in AI," she wrote in her book "Trustworthy AI." "Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight." Protocol spoke with Ammanath about why ethical AI practices should be part of every employee's training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.
Trustworthy AI: How to ensure trust and ethics in AI
A pragmatic and direct approach to ethics and trust in artificial intelligence (AI) -- who would not want that? This is how Beena Ammanath describes her new book, Trustworthy AI. Ammanath is the executive director of the Global Deloitte AI Institute. She has had stints at GE, HPE and Bank of America, in roles such as vice president of data science and innovation, CTO of artificial intelligence and lead of data and analytics.
Global Big Data Conference
When hiring, many organizations use artificial intelligence tools to scan resumes and predict job-relevant skills. Colleges and universities use AI to automatically score essays, process transcripts and review extracurricular activities to predetermine who is likely to be a "good student." With so many unique use-cases, it is important to ask: can AI tools ever be truly unbiased decision-makers? In response to claims of unfairness and bias in tools used in hiring, college admissions, predictive policing, health interventions, and more, the University of Minnesota recently developed a new set of auditing guidelines for AI tools. The auditing guidelines, published in the American Psychologist, were developed by Richard Landers, associate professor of psychology at the University of Minnesota, and Tara Behrend from Purdue University.
Fairness and bias in artificial intelligence - Big Data and AI Toronto
We can also make the algorithm set a different threshold for different sub-groups, making it easier or harder for a subgroup to get an optimistic prediction. Adopting such a technique will help offset the historical disadvantage one subgroup has over the other. Dr. Thomas shared one last thought with the audience. "You need to ensure fairness in your AI models. Don't assume that the algorithm is just going to do it properly, that the data is probably clean enough, or that it's not going to matter that much. Everyone needs to solve this very important problem by taking active measures to assess whether your model's predictions are fair and to fix it using some of these techniques."
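The group-specific threshold idea Dr. Thomas describes can be sketched in a few lines. This is a minimal illustration, not any speaker's actual implementation: the function names are invented here, and it uses a simple demographic-parity criterion (pick each group's cutoff so its positive-prediction rate is closest to a common target) as one assumed way of equalizing outcomes.

```python
import numpy as np

def fit_group_thresholds(scores, groups, target_rate=0.5):
    """For each group, pick the score cutoff whose positive-prediction
    rate is closest to target_rate, so groups are approved at similar rates."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        candidates = np.unique(s)  # every observed score is a candidate cutoff
        thresholds[g] = min(
            candidates,
            key=lambda t: abs((s >= t).mean() - target_rate),
        )
    return thresholds

def predict(scores, groups, thresholds):
    """Apply each group's own threshold to its members' scores."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
```

A group whose scores skew lower ends up with a lower cutoff, which is exactly the offsetting of historical disadvantage described above; the trade-off (equalizing selection rates versus error rates) is a policy choice, not something the code decides for you.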
The machine learning hype is real, but enterprises are still in the early adoption stages
Enterprise machine learning adoption has reached a stage at which the approach, utilisation and distinct job titles are being debated and implemented. And as machine learning becomes more widely accepted across industries, we're only at the tip of the iceberg when it comes to how companies are embracing this technology. Machine learning is typically used to enable some level of automation, from a rule-based fraud detection system to a complex model that automatically learns from examples. Machine learning is only valuable to an organisation when there are benefits, which could be anything from improving decision making to increasing revenue or engagement among employees or customers.
It's Still Early Days for Machine Learning Adoption
Despite the hype surrounding artificial intelligence, we're still in the early stages of adopting machine learning in the enterprise, according to a new survey released today by O'Reilly Media. The survey also found that large-scale production deep learning rarely happens on the cloud, and that companies pursuing machine learning are actively embracing privacy, security, and fairness. Nearly half (49%) of the 11,400 people who took O'Reilly's survey this June indicated they were in the exploration phase of machine learning and have not deployed any machine learning models into production. That compares to 36% who said they were early adopters (models in production from two to five years), while 15% considered themselves sophisticated users (models in production for more than five years). "A lot of people are very interested in machine learning, but a lot of them are in the getting-started phase in terms of actually putting these things into productions in products and services," O'Reilly's Chief Data Scientist Ben Lorica tells Datanami.